Data-driven quantitative photoacoustic tomography
Spatial information about the 3D distribution of blood oxygen saturation (sO2) in
vivo is of clinical interest as it encodes important physiological information about
tissue health/pathology. Photoacoustic tomography (PAT) is a biomedical imaging
modality that, in principle, can be used to acquire this information. Images are
formed by illuminating the sample with a laser pulse where, after multiple scattering events, the optical energy is absorbed. A subsequent rise in temperature induces
an increase in pressure (the photoacoustic initial pressure p0) that propagates to the
sample surface as an acoustic wave. These acoustic waves are detected as pressure
time series by sensor arrays and used to reconstruct images of the sample's p0 distribution. This encodes information about the sample's absorption distribution, and can
be used to estimate sO2. However, an ill-posed nonlinear inverse problem stands in
the way of acquiring estimates in vivo. Current approaches to solving this problem
fall short of being widely and successfully applied to in vivo tissues due to their
reliance on simplifying assumptions about the tissue, prior knowledge of its optical
properties, or the formulation of a forward model accurately describing image acquisition with a specific imaging system. Here, we investigate the use of data-driven
approaches (deep convolutional networks) to solve this problem. Networks only require a dataset of examples to learn a mapping from PAT data to images of the sO2
distribution. We show the results of training a 3D convolutional network to estimate
the 3D sO2 distribution within model tissues from 3D multiwavelength simulated
images. However, acquiring a realistic training set to enable successful in vivo
application is non-trivial given the challenges associated with estimating ground
truth sO2 distributions and the current limitations of simulating training data. We suggest and test several methods to 1) acquire more realistic training data or 2) improve
network performance in the absence of adequate quantities of realistic training data.
For 1) we describe how training data may be acquired from an organ perfusion system and outline a possible design. Separately, we describe how training data may
be generated synthetically using a variant of generative adversarial networks called
ambientGANs. For 2), we show how the accuracy of networks trained with limited
training data can be improved with self-training. We also demonstrate how the domain gap between training and test sets can be minimised with unsupervised domain
adaptation to improve quantification accuracy. Overall, this thesis clarifies the advantages of data-driven approaches and suggests concrete steps towards overcoming
the challenges of in vivo application.
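The estimation problem at the heart of this work can be illustrated with the classical model-based baseline that data-driven approaches aim to replace: if the wavelength-dependent absorption coefficient were known exactly, sO2 could be recovered by linear spectral unmixing of oxy- and deoxyhaemoglobin. A minimal NumPy sketch (the extinction coefficients below are illustrative placeholders, not tabulated values):

```python
import numpy as np

# Illustrative (NOT tabulated) extinction coefficients for oxy- and
# deoxyhaemoglobin at three wavelengths, in arbitrary units.
# Rows: wavelengths; columns: [HbO2, Hb].
E = np.array([
    [2.8, 7.5],   # e.g. a wavelength where Hb dominates
    [4.5, 3.9],   # e.g. near an isosbestic point
    [6.1, 2.3],   # e.g. a wavelength where HbO2 dominates
])

def unmix_so2(mu_a):
    """Estimate sO2 from multiwavelength absorption coefficients by
    least-squares unmixing of mu_a = E @ [C_HbO2, C_Hb]."""
    c, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
    c_hbo2, c_hb = c
    return c_hbo2 / (c_hbo2 + c_hb)

# Synthetic check: build mu_a from known concentrations and recover sO2.
c_true = np.array([0.7, 0.3])   # true sO2 = 0.7
mu_a = E @ c_true
print(round(unmix_so2(mu_a), 3))   # -> 0.7
```

The catch, as the abstract notes, is that p0 is not mu_a: it is the product of mu_a and the unknown, spatially varying light fluence, which is what makes the in vivo inverse problem ill-posed and motivates the learned mapping.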
Large-scale electrophysiology and deep learning reveal distorted neural signal dynamics after hearing loss
Listeners with hearing loss often struggle to understand speech in noise, even with a hearing aid. To better understand the auditory processing deficits that underlie this problem, we made large-scale brain recordings from gerbils, a common animal model for human hearing, while presenting a large database of speech and noise sounds. We first used manifold learning to identify the neural subspace in which speech is encoded and found that it is low-dimensional and that the dynamics within it are profoundly distorted by hearing loss. We then trained a deep neural network (DNN) to replicate the neural coding of speech with and without hearing loss and analyzed the underlying network dynamics. We found that hearing loss primarily impacts spectral processing, creating nonlinear distortions in cross-frequency interactions that result in a hypersensitivity to background noise that persists even after amplification with a hearing aid. Our results identify a new focus for efforts to design improved hearing aids and demonstrate the power of DNNs as a tool for the study of central brain structures.
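The subspace analysis described above can be illustrated in its simplest linear form (the study's manifold-learning method may well be nonlinear; this is only a stand-in) by projecting high-dimensional population activity onto its leading principal components:

```python
import numpy as np

def neural_subspace(X, k):
    """Project neural activity X (time x neurons) onto its k leading
    principal components -- a linear stand-in for manifold learning.
    Returns the projected trajectories and per-component explained
    variance fractions."""
    Xc = X - X.mean(axis=0)                        # centre each neuron
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)
    return Xc @ Vt[:k].T, explained[:k]

# Synthetic "population recording": 200 time points, 50 neurons,
# driven by 2 latent signals plus weak noise -> low-dimensional.
rng = np.random.default_rng(0)
latents = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 50))
X = latents @ mixing + 0.01 * rng.standard_normal((200, 50))

proj, var = neural_subspace(X, 2)
print(proj.shape, var.sum() > 0.95)   # two components capture the signal
```

Comparing such low-dimensional trajectories between normal-hearing and hearing-impaired recordings is one way the distortion of within-subspace dynamics can be quantified.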
Toward accurate quantitative photoacoustic imaging: learning vascular blood oxygen saturation in three dimensions
Abstract
Significance:
Two-dimensional (2-D) fully convolutional neural networks have been shown capable of producing maps of sO2 from 2-D simulated images of simple tissue models. However, their potential to produce accurate estimates in vivo is uncertain as they are limited by the 2-D nature of the training data when the problem is inherently three-dimensional (3-D), and they have not been tested with realistic images.
Aim:
To demonstrate the capability of deep neural networks to process whole 3-D images and output 3-D maps of vascular sO2 from realistic tissue models/images.
Approach:
Two separate fully convolutional neural networks were trained to produce 3-D maps of vascular blood oxygen saturation and vessel positions from multiwavelength simulated images of tissue models.
Results:
The mean of the absolute difference between the true mean vessel sO2 and the network output for 40 examples was 4.4% and the standard deviation was 4.5%.
Conclusions:
3-D fully convolutional networks were shown capable of producing accurate sO2 maps using the full extent of spatial information contained within 3-D images generated under conditions mimicking real imaging scenarios. We demonstrate that networks can cope with some of the confounding effects present in real images, such as limited-view artifacts, and have the potential to produce accurate estimates in vivo.
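The core operation in such a network is a 3-D convolution whose input channels are the imaging wavelengths. A naive NumPy implementation makes the shape bookkeeping concrete (the channel counts and kernel sizes here are illustrative, not those of the published network):

```python
import numpy as np

def conv3d(vol, kernels):
    """Naive valid-mode 3-D convolution (strictly, cross-correlation,
    as in deep-learning frameworks).
    vol:     (C_in, D, H, W)            multiwavelength image volume
    kernels: (C_out, C_in, kd, kh, kw)  learned filters
    returns: (C_out, D-kd+1, H-kh+1, W-kw+1)"""
    c_out, c_in, kd, kh, kw = kernels.shape
    _, D, H, W = vol.shape
    out = np.zeros((c_out, D - kd + 1, H - kh + 1, W - kw + 1))
    for o in range(c_out):
        for z in range(out.shape[1]):
            for y in range(out.shape[2]):
                for x in range(out.shape[3]):
                    patch = vol[:, z:z+kd, y:y+kh, x:x+kw]
                    out[o, z, y, x] = np.sum(patch * kernels[o])
    return out

# Two wavelength channels, an 8x8x8 volume, four 3x3x3 filters.
rng = np.random.default_rng(1)
vol = rng.standard_normal((2, 8, 8, 8))
k = rng.standard_normal((4, 2, 3, 3, 3))
print(conv3d(vol, k).shape)   # -> (4, 6, 6, 6)
```

Because each filter spans all wavelength channels and a 3-D spatial neighbourhood, a stack of such layers can exploit both spectral and full volumetric context, which is precisely what a 2-D network cannot do.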
Experimental evaluation of a 3-D fully convolutional network for learning blood oxygenation saturation using photoacoustic imaging
A 3D convolutional neural network trained using only simulated data was assessed for estimating surrogate sO2 from multiwavelength photoacoustic images of experimental phantoms with well-defined ground truths.